18 research outputs found

    Sign Language Recognition

    This chapter covers the key aspects of sign-language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and its impact on the field. The types of data available and their relative merits are explored, allowing examination of the features which can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from tracking and non-tracking viewpoints, before summarising some of the approaches to the non-manual aspects of sign languages. Methods for combining the sign classification results into full SLR are given, showing the progression towards speech recognition techniques and the further adaptations required for the sign-specific case. Finally, the current frontiers are discussed and recent research presented. This covers the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of current linguistic research, and adapting to larger, noisier data sets.

    Classification of gesture with layered meanings

    Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), vol. 2915, pp. 239-246

    Deciphering gestures with layered meanings and signer adaptation

    Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 559-56. DOI: 10.1109/AFGR.2004.1301592

    Automatic sign language analysis: A survey and the future beyond lexical meaning

    IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6), 873-891. DOI: 10.1109/TPAMI.2005.112

    A new probabilistic model for recognizing signs with systematic modulations

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 4778 LNCS, pp. 16-3

    Deciphering layered meaning in gestures

    Proceedings - International Conference on Pattern Recognition, 16(3), 815-818

    Modeling Layered Meaning with Gesture Parameters

    Proceedings of the 7th International Conference on Control, Automation, Robotics and Vision, ICARCV 2002, pp. 1591-159

    Understanding gestures with systematic variations in movement dynamics

    Pattern Recognition, 39(9), 1633-1648. DOI: 10.1016/j.patcog.2006.02.010

    Recognizing two handed gestures with generative, discriminative and ensemble methods via Fisher kernels

    Abstract. The use of gestures extends Human-Computer Interaction (HCI) possibilities in multimodal environments. However, the great variability in gestures, in time, size, and position, as well as interpersonal differences, makes the recognition task difficult. With their power in modeling sequence data and handling variable-length sequences, Hidden Markov Models (HMMs) are a natural choice for modeling hand gestures. On the other hand, discriminative methods such as Support Vector Machines (SVMs) have flexible decision boundaries and better classification performance than model-based approaches such as HMMs. By extracting features from gesture sequences via Fisher kernels based on HMMs, classification can be done by a discriminative classifier. We compared the performance of this combined classifier with generative and discriminative classifiers on a small database of two-handed gestures recorded with two cameras, using Kalman tracking of the hands from both cameras with center-of-mass and blob tracking. The results show that (i) blob tracking incorporates general hand shape with hand motion and performs better than simple center-of-mass tracking; (ii) in a stereo camera setup, even if 3D reconstruction is not possible, combining 2D information from each camera at the feature level decreases the error rates; and (iii) the Fisher score methodology combines the strengths of the generative and discriminative approaches and increases classification performance.
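    The Fisher-kernel pipeline described in this abstract can be sketched as follows: each variable-length sequence is passed through a base HMM, the gradient of the sequence log-likelihood with respect to the model's parameters (the Fisher score) yields a fixed-length feature vector, and a discriminative classifier is trained on those vectors. The sketch below is illustrative, not the paper's implementation: it uses a toy discrete-output HMM with made-up parameters, takes the gradient with respect to the emission matrix only, and substitutes a least-squares linear rule for the SVM.

    ```python
    import numpy as np

    def forward_backward(A, B, pi, obs):
        """Scaled forward-backward for a discrete-output HMM.
        Returns the sequence log-likelihood and per-step state posteriors."""
        T, N = len(obs), A.shape[0]
        alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)
        return np.log(c).sum(), gamma

    def fisher_score(A, B, pi, obs):
        """Fisher score w.r.t. the emission matrix B:
        d logP / d B[i,k] = sum_t gamma_t(i) * [obs_t == k] / B[i,k]."""
        _, gamma = forward_backward(A, B, pi, obs)
        grad = np.zeros_like(B)
        for t, o in enumerate(obs):
            grad[:, o] += gamma[t] / B[:, o]
        return grad.ravel()  # fixed-length vector regardless of sequence length

    rng = np.random.default_rng(0)
    N, M = 2, 3  # hidden states, observation symbols (toy sizes)
    A  = np.array([[0.8, 0.2], [0.3, 0.7]])
    pi = np.array([0.6, 0.4])
    B_base = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])  # shared "anchor" HMM

    def sample(B, T):
        """Draw one observation sequence from an HMM with emission matrix B."""
        s, seq = rng.choice(N, p=pi), []
        for _ in range(T):
            seq.append(rng.choice(M, p=B[s]))
            s = rng.choice(N, p=A[s])
        return seq

    # Two synthetic "gesture classes" with different emission behaviour.
    B_pos = np.array([[0.8, 0.15, 0.05], [0.05, 0.15, 0.8]])
    B_neg = np.array([[0.2, 0.3, 0.5], [0.5, 0.3, 0.2]])
    X = [sample(B_pos, 20) for _ in range(40)] + [sample(B_neg, 20) for _ in range(40)]
    y = np.array([1] * 40 + [-1] * 40)

    # Map every sequence to its Fisher score under the anchor model.
    F = np.array([fisher_score(A, B_base, pi, obs) for obs in X])
    F = (F - F.mean(0)) / (F.std(0) + 1e-9)  # standardise features

    # Linear decision rule on the Fisher scores (stand-in for the SVM).
    D = np.c_[F, np.ones(len(F))]
    w, *_ = np.linalg.lstsq(D, y, rcond=None)
    acc = ((D @ w > 0) == (y > 0)).mean()
    print(f"training accuracy on Fisher scores: {acc:.2f}")
    ```

    The point of the construction is the fixed-length `fisher_score` vector: it lets variable-length sequences, which the HMM handles naturally, be fed to any discriminative classifier with a fixed input dimension, which is exactly the generative/discriminative combination the abstract evaluates.
    
    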

    Partially observable markov decision process (POMDP) technologies for sign language based human-computer interaction

    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 5616 LNCS, Part 3, pp. 577-58. DOI: 10.1007/978-3-642-02713-0_61